Learning Convex Combinations of Continuously Parameterized Basic Kernels

Authors

  • Andreas Argyriou
  • Charles A. Micchelli
  • Massimiliano Pontil
Abstract

We study the problem of learning a kernel which minimizes a regularization error functional such as that used in regularization networks or support vector machines. We consider this problem when the kernel is in the convex hull of basic kernels, for example, Gaussian kernels which are continuously parameterized by a compact set. We show that there always exists an optimal kernel which is the convex combination of at most m + 1 basic kernels, where m is the sample size, and provide a necessary and sufficient condition for a kernel to be optimal. The proof of our results is constructive and leads to a greedy algorithm for learning the kernel. We discuss the properties of this algorithm and present some preliminary numerical simulations.
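The greedy algorithm itself is not spelled out in the abstract. The sketch below is only an illustration, assuming square-loss regularization (a regularization network), Gaussian basic kernels whose bandwidth is drawn from a finite grid standing in for the compact parameter set, and a simple add-one-basic-kernel-then-line-search update of the convex weights; all names and numerical choices are ours, not the authors'.

    import numpy as np

    def gaussian_kernel(X, sigma):
        # Gram matrix of the Gaussian basic kernel K_sigma(x, z) = exp(-|x - z|^2 / (2 sigma^2)).
        sq = np.sum(X ** 2, axis=1)
        d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def reg_error(K, y, lam):
        # Optimal value of the square-loss regularization functional for a fixed Gram
        # matrix K: min_c |K c - y|^2 + lam c^T K c = lam y^T (K + lam I)^{-1} y.
        c = np.linalg.solve(K + lam * np.eye(len(y)), y)
        return float(lam * y @ c)

    def greedy_kernel_learning(X, y, sigmas, lam=0.1, n_steps=5):
        # Greedily build a convex combination K = sum_t w_t K_{sigma_t}: at each step,
        # pick the basic kernel whose mixture with the current K (over a coarse line
        # search on the mixing coefficient) most reduces the regularization error.
        grams = {s: gaussian_kernel(X, s) for s in sigmas}
        best = min(sigmas, key=lambda s: reg_error(grams[s], y, lam))
        support, weights = [best], np.array([1.0])
        alphas = np.linspace(0.0, 1.0, 21)
        for _ in range(n_steps):
            K = sum(w * grams[s] for w, s in zip(weights, support))
            cand = min(sigmas, key=lambda s: min(
                reg_error((1 - a) * K + a * grams[s], y, lam) for a in alphas))
            a = min(alphas, key=lambda a: reg_error((1 - a) * K + a * grams[cand], y, lam))
            support.append(cand)
            weights = np.append((1 - a) * weights, a)
        return support, weights

    # toy usage
    rng = np.random.RandomState(0)
    X = rng.randn(30, 2)
    y = np.sin(X[:, 0]) + 0.1 * rng.randn(30)
    support, weights = greedy_kernel_learning(X, y, sigmas=np.logspace(-1, 1, 20))

The finite bandwidth grid is used purely for tractability; the paper works with a continuously parameterized, compact family of basic kernels.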


Similar Articles

Stochastic Low-Rank Kernel Learning for Regression

We present a novel approach to learning a kernel-based regression function. It is based on the use of conical combinations of data-based parameterized kernels and on a new stochastic convex optimization procedure for which we establish convergence guarantees. The overall learning procedure has the nice properties that a) the learned conical combination is automatically designed to perform the regres...

Two-Layer Multiple Kernel Learning

Multiple Kernel Learning (MKL) aims to learn kernel machines for solving a real machine learning problem (e.g. classification) by exploring the combinations of multiple kernels. The traditional MKL approach is in general “shallow” in the sense that the target kernel is simply a linear (or convex) combination of some base kernels. In this paper, we investigate a framework of Multi-Layer Multiple...
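For reference, the "shallow" case reduces to fixing simplex weights over a few base Gram matrices and training an ordinary kernel machine on the resulting combination. The sketch below does this with scikit-learn's precomputed-kernel SVM and a naive grid search over the weights; it is a baseline of our own construction, not the multi-layer optimization the paper proposes.

    import numpy as np
    from itertools import product
    from sklearn.datasets import make_classification
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, random_state=0)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

    gammas = [0.01, 0.1, 1.0]
    K_tr = [rbf_kernel(X_tr, gamma=g) for g in gammas]        # base Gram matrices, train x train
    K_va = [rbf_kernel(X_va, X_tr, gamma=g) for g in gammas]  # validation x train

    best = None
    # brute-force search over simplex weights (step 0.25) for the convex combination
    for w in product(np.arange(0.0, 1.25, 0.25), repeat=len(gammas)):
        if not np.isclose(sum(w), 1.0):
            continue
        clf = SVC(kernel="precomputed").fit(sum(wi * K for wi, K in zip(w, K_tr)), y_tr)
        acc = clf.score(sum(wi * K for wi, K in zip(w, K_va)), y_va)
        if best is None or acc > best[0]:
            best = (acc, w)
    print("validation accuracy %.3f with weights %s" % best)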

Learning the Kernel Function via Regularization

We study the problem of finding an optimal kernel from a prescribed convex set of kernels K for learning a real-valued function by regularization. We establish for a wide variety of regularization functionals that this leads to a convex optimization problem and, for square loss regularization, we characterize the solution of this problem. We show that, although K may be an uncountable set, the ...

Kernel Selection for Semi-Supervised Kernel Machines

Existing semi-supervised learning methods are mostly based on either the cluster assumption or the manifold assumption. In this paper, we propose an integrated regularization framework for semi-supervised kernel machines by incorporating both the cluster assumption and the manifold assumption. Moreover, it supports kernel learning in the form of kernel selection. The optimization problem involv...

Learning with Infinitely Many Kernels via Semi-infinite Programming

In recent years, learning methods have become desirable because of their reliability and efficiency in real-world problems. We propose a novel method that finds infinitely many kernel combinations for learning problems, with the help of infinite and semi-infinite optimization over all elements of the kernel space. This makes it possible to study variations of kernel combinations when considering heterogene...

Publication date: 2005